Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud etc.
Posted by Dhaval Upadhyay
1 - 15 yrs
₹5L - ₹10L / yr
Pune, Chicago, Hyderabad, New York
Skills
Ab Initio
Cognos
MicroStrategy
Business Analysts
Hadoop
Informatica PowerCenter
Tableau
Exusia, Inc. (ex-OO-see-ah: translated from Greek to mean "Immensely Powerful and Agile") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions. Exusia is a multinational, delivery-centric firm that provides consulting and software as a service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012 with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia’s international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives.

At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting-edge solutions.

About Exusia

Founded :
2012
Type :
Services
Size :
100-1000
Stage :
Profitable

About

Exusia is a multinational firm that provides consulting and software as a service (SaaS) solutions to leading organizations in the healthcare, finance, telecommunications, consumer products, hospitality, supply chain and high technology industries. It addresses the growing gap in the strategy and data engineering space as the next global leader in analytics, data engineering and cloud computing solutions. Exusia is ISO 27001 certified and offers managed services to organizations facing the largest data volumes and the most complex data engineering requirements. Exusia was founded in 2012 in New York City and has its Americas headquarters in Miami, European headquarters in London, Africa headquarters in Johannesburg and Asia headquarters in Pune, with delivery centers in Pune, Gurugram, Chennai, Hyderabad and Bangalore.

Connect with the team

Dhaval Upadhyay

Company social profiles

LinkedIn · Twitter · Facebook

Similar jobs

Digital Banking Firm
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
+5 more
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platform, the machine learning platform, and platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention and many more. We are also the Data Science and Analytics partners for Product and provide Behavioural Science insights across Jupiter.
 
About the role:

We’re looking for strong Software Engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elasticsearch, TensorFlow, PyTorch and other technologies to build the next-generation Data Platform, ML Platform and Experimentation Platform. If this sounds interesting, we’d love to hear from you!
This role involves designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to those specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

Key Responsibilities:

Participate in, own and influence the architecture and design of systems
Collaborate with other engineers, data scientists and product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drive better decision making
 

Required Skills:

Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience with databases, analytics, big data systems or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data technologies such as Hadoop and Apache Spark
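The map → shuffle → reduce model that Hadoop MapReduce and Spark jobs are built on can be sketched in plain Python. This is an illustration only (no cluster, no framework); the word-count example and function names are hypothetical:

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a mapper would.
    return [(word, 1) for line in lines for word in line.split()]

def shuffle_phase(pairs):
    # Group values by key, as the framework's shuffle step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word, as a reducer would.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big platform", "data platform"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'platform': 2}
```

A distributed engine runs the same three phases, but partitions the map output across machines and moves grouped keys over the network during the shuffle.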
Data Warehouse Architect
Agency job
via The Hub by Sridevi Viswanathan
Mumbai
8 - 10 yrs
₹20L - ₹23L / yr
Data Warehouse (DWH)
ETL
Hadoop
Apache Spark
Spark
+4 more
• You will work alongside the Project Management team to ensure alignment of plans with what is being delivered.
• You will utilize your configuration management and software release experience, as well as change management concepts, to drive the success of the projects.
• You will partner with senior leaders to understand and communicate the business needs and translate them into IT requirements, and consult with the customer’s Business Analysts on their data warehouse requirements.
• You will assist the technical team in identification and resolution of data quality issues.
• You will manage small to medium-sized projects relating to the delivery of applications or application changes.
• You will use Managed Services or 3rd-party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the enterprise to resolve issues.
• Recommend and advocate different approaches and designs to the requirements
• Write technical design docs
• Execute data modelling
• Provide solution inputs for the presentation layer
• You will craft and generate summary, statistical and presentation reports, as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned

Preferred Qualifications

• Strong interpersonal, teamwork, organizational and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities as well as exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market
• Proven ability to drive continuous improvement
client of Merito
Agency job
via Merito by Merito Talent
Mumbai
3 - 8 yrs
Best in industry
Python
SQL
Tableau
PowerBI
PHP
+2 more

Our client is the world’s largest media investment company and a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding and implementing the custom marketing applications that the Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data store and procedures for their maintenance
  • Perform data extract, transform, and load activities from several data sources.
  • Develop and maintain strong relationships with stakeholders
  • Write high quality code as per prescribed standards.
  • Participate in internal projects as required

 
Minimum qualifications:

  • B. Tech./MCA or equivalent preferred
  • At least 3 years of hands-on experience in big data, ETL development and data processing.


    What you’ll bring:

  • Strong experience working with Snowflake, SQL and PHP/Python.
  • Strong experience writing complex SQL.
  • Good communication skills.
  • Good experience working with a BI tool such as Tableau or Power BI.
  • Sqoop, Spark, EMR and Hadoop/Hive are good to have.
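A data extraction and manipulation step driven by a business rule, of the kind this role describes, can be sketched in plain Python. The rows, field names and the budget rule are all hypothetical; in practice the source would be Snowflake or another store and the rule would come from the business:

```python
# Minimal ETL sketch: extract rows, apply a business rule, produce a
# summary ready to load. All data and the rule itself are hypothetical.

raw_rows = [  # "extract": rows as they might arrive from a source system
    {"campaign": "spring", "channel": "search", "spend": 120.0},
    {"campaign": "spring", "channel": "social", "spend": 80.0},
    {"campaign": "fall", "channel": "search", "spend": 50.0},
]

def transform(rows):
    # Business rule (hypothetical): aggregate spend per campaign and
    # flag campaigns whose total spend exceeds 100.
    totals = {}
    for row in rows:
        totals[row["campaign"]] = totals.get(row["campaign"], 0.0) + row["spend"]
    return [
        {"campaign": c, "total_spend": t, "over_budget": t > 100.0}
        for c, t in totals.items()
    ]

summary = transform(raw_rows)  # the "load" step would write this to a table
print(summary)
```

The same extract/transform/load shape holds whether the transform runs in SQL, Python or a pipeline tool; only the execution engine changes.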

 

 

one of the world's leading multinational investment bank
Agency job
via HiyaMee by Lithin Raj
Pune
5 - 9 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects. The role calls for full-stack Hadoop development experience with Scala, and full-stack Java development experience covering Core Java (including JDK 1.8) with a good understanding of design patterns.
Requirements:
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation.
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro and messaging systems like Kafka
• Experience with a core compiled language: Java
• Proficiency in Java-related frameworks like Spring, Hibernate and JPA
• Hands-on experience with JDK 1.8 and a strong skillset covering Collections and Multithreading, with experience working on distributed applications
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL) and data modelling
• Understanding of Hadoop architecture and the Scala programming language is good to have
Srijan Technologies
at Srijan Technologies
6 recruiters
Posted by Srijan Technologies
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
Big Data
Apache Kafka
Hadoop
Spark
Data engineering
+3 more
Job Description:
We are looking for a Data Engineer whose responsibilities include creating machine learning models and retraining systems. To do this job successfully, you need exceptional skills in statistics and programming. If you also have knowledge of data science and software engineering, your ultimate goal will be to shape and build efficient self-learning applications.


Technical Knowledge (Must Have)

  • Strong experience in SQL / HiveQL / AWS Athena
  • Strong expertise in the development of data pipelines (SnapLogic is preferred)
  • Design, development, deployment and administration of data processing applications
  • Good exposure to AWS and Azure cloud computing environments
  • Knowledge of big data, AWS cloud architecture, best practices, security, governance, metadata management, data quality etc.
  • Data extraction from various firm sources (RDBMS, unstructured data sources) and loading to a data lake following best practices
  • Knowledge in Python
  • Good knowledge in NoSQL technologies (Neo4J/ MongoDB)
  • Experience/knowledge in SnapLogic (ETL Technologies)
  • Working knowledge on Unix (AIX, Linux), shell scripting
  • Experience/knowledge in Data Modeling. Database Development
  • Experience/knowledge creation of reports and dashboards in Tableau/ PowerBI
Ganit Business Solutions
at Ganit Business Solutions
3 recruiters
Vijitha VS
Posted by Vijitha VS
Remote only
4 - 7 yrs
₹10L - ₹30L / yr
Scala
ETL
Informatica
Data Warehouse (DWH)
Big Data
+4 more

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then implementing, maintaining and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily on Cloud as well as on-prem infrastructure
  • Build data models to store the data in the most optimized manner
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implementing the ETL process and optimal data pipeline architecture
  • Monitoring performance and advising any necessary infrastructure changes.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peers’ code and pipelines before deploying to production, checking for optimization issues and code standards

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
  • Proficient understanding of distributed computing principles
  • Experience in working with batch processing/ real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
  • Implemented complex projects dealing with the considerable data size (PB).
  • Optimization techniques (performance, scalability, monitoring, etc.)
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.,
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Creation of DAGs for data engineering
  • Expert at Python /Scala programming, especially for data engineering/ ETL purposes

 

 

 

Cashback & Social Commerce Pioneer
Agency job
via Unnati by Astha Bharadwaj
Delhi, Gurugram, Noida
1 - 5 yrs
₹8L - ₹15L / yr
Data Analytics
Tableau
Marketing
Are you looking for a team that is built on comradeship, lets you embrace your individuality and also pays you a handsome compensation? Then read on.
 
Our client is one of the largest shopping and cashback websites, enabling its members to find products from various e-commerce sites like Amazon, Flipkart, Myntra and CliQ, compare prices for similar products across these sites, and earn handsome cashback or discount coupons for every purchase. Their app and website work across Android and iOS, with a superior, easy user interface and experience and extremely handy features. With over 50 lakh users, they have paid over 100 crores as cashback and have expanded their offices to multiple locations, as well as launched a new social commerce platform. Backed by a leading VC firm and the chairman of a top Indian conglomerate, the founders have worked their way up from scratch, serving up innovative business solutions and empowering their team in a fast-paced environment.
 
As a Data Analyst- Marketing, you along with a team will be responsible for building, developing and maintaining data models, reporting systems, data automation systems and dashboards that support key business decisions.

What you will do:
 
  • Bringing data to life via Historical and Real Time Dashboard like:
1. Traffic Metrics, broken by Source, Channel, etc
2. Transaction behavior analytics
3. User level analytics
4. Propensity models and personalization models, e.g. what’s the best product or offer to drive a sale
5. Emails/ SMS/ AN – analytics models getting data from platforms like Netcore+ Branch+ Internal Tables
6. Other reports that are relevant to know what is working, gaps, etc

  • Monitoring key metrics such as commission, gross margin, conversion, customer acquisitions etc
  • Using the data models and reports to draw actionable and meaningful insights. Based on the insights helping drive the strategy, optimization opportunities, product improvement and more
  • Demonstrating examples where data interpretation led to improvement in core business outcomes like better conversion, better ROI from Ad spends, improvements in product etc.
  • Digging into data to identify opportunities or problems and translating them into easy-to-understand way for all key business teams
  • Working closely with various business stakeholders: Marketing, Customer Insights, Growth and Product teams
  • Ensuring effective and timely delivery of reports and insights that analyze business functions and key operations and performance metrics

 


Candidate Profile:

What you need to have:

  • Minimum 2 years of data analytics and interpretation experience
  • Proven experience to show how data analytics shaped strategy, marketing and product. Should have multiple examples of this based on current and past experience
  • Strong Data Engineering skills using tools like SQL, Python, Tableau, Power BI, Advanced Excel, PowerQuery etc
  • Strong marketing acumen to be able to translate data into marketing outcomes
  • Good understanding of Google Analytics, Google AdWords, Facebook Ads, CleverTap and other analytics tools
  • Familiarity with data sources like Branch/ Clevertap / Firebase, Big Query, Internal Tables (User/ Click/ Transaction/ Events etc)    
Telecom
Agency job
via Bhanven Cyber Tech Inc by Ramya Bhaven
Remote only
7 - 10 yrs
$0.1K - $0.1K / yr
Java
JavaScript
HTML/CSS
Spring
Hibernate (Java)
+8 more
  • This is a contracting role, max $70/hr.
     Java or Scala web developer with 8 to 12 years of experience and strong fundamentals/proficiency in core technologies used for web development: HTML, CSS, JavaScript, Spring and Hibernate (including relational database experience).

    • Object-oriented analysis and design patterns using Java/J2EE technologies

    • Knowledge of the Spring Framework, MVC architectures and ORM frameworks like Hibernate

    • Experience with Restful Web Services, data modeling  

  • Strong experience in relational database design and development (preferably with Oracle) and understanding of NoSQL databases like HBase, Druid, Solr 

  • Experience working with event/message-based communication platforms such as Kafka, ActiveMQ etc., 

    • Experience working with Hadoop technologies and Spark framework  

    • Working proficiency in build and development tools (Maven, Gradle, Jenkins) 

    • Experience with test frameworks like JUnit, Mockito  

  • Experience in front end development using modern JavaScript frameworks and charting frameworks

PublicVibe
at PublicVibe
1 recruiter
Dhaneesha Dominic
Posted by Dhaneesha Dominic
Hyderabad
1 - 3 yrs
₹1L - ₹3L / yr
Java
Data Science
Python
Natural Language Processing (NLP)
Scala
+3 more
Hi candidates, greetings from PublicVibe! We are hiring NLP Engineers / Data Scientists with 0.6 to 2.5 years of experience for our Hyderabad location. If anyone is looking out for opportunities or a job change, please reach out to us. Regards, Dhaneesha Dominic.
FuGenX Technologies
at FuGenX Technologies
1 video
3 recruiters
Aradya S
Posted by Aradya S
Hyderabad, Bengaluru (Bangalore)
1 - 6 yrs
₹1.3L - ₹6.3L / yr
Python
MongoDB
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Intelligence (AI)
+2 more
  • 3 or more years of experience in Python programming and a good understanding of object-oriented programming concepts
  • Working knowledge of Flask and MongoDB
  • At least 2-3 years of experience in machine learning and NLP
  • Proficient with machine learning and deep learning libraries like pandas, NumPy, scikit-learn, TensorFlow and Keras; expertise in deep learning (neural networks) would be an advantage
  • Proficient with a natural language processing library like NLTK or spaCy
  • Hands-on technical experience of artificial intelligence using open-source programming languages and analytics software
  • Experience using data science techniques such as web scraping, text mining, natural language processing, machine learning, statistical modelling or image recognition
  • Data visualisation skills, e.g. in Tableau or QlikView, would be an advantage
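Text mining and NLP work of the kind listed above usually starts from a bag-of-words representation. A minimal stdlib-only sketch (the naive regex tokenizer and the sample sentences are illustrative; a real pipeline would use NLTK or spaCy for tokenization and lemmatization):

```python
from collections import Counter
import re

def tokenize(text):
    # Naive tokenizer for illustration: lowercase, keep letter runs.
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(docs):
    # One term-frequency Counter per document.
    return [Counter(tokenize(doc)) for doc in docs]

docs = ["NLP turns text into features.", "Features feed machine learning."]
vectors = bag_of_words(docs)
print(vectors[0]["features"])  # 1
```

These per-document counts are the raw term frequencies that TF-IDF weighting and most classical ML text classifiers build on.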